6 research outputs found

    Parallel Evidence-Based Indexing of Complex Three-Dimensional Models Using Prototypical Parts and Relations (Dissertation Proposal)

    Get PDF
    This proposal is concerned with three-dimensional object recognition from range data using superquadric primitives. Superquadrics are a family of parametric shape models that represent objects at the part level and can account for a wide variety of natural and man-made forms. An integrated framework for segmenting dense range data of complex 3-D objects into their constituent parts, in terms of bi-quadric surface patches and superquadric shape primitives, is described in [29]. We propose a vision architecture that scales well as the size of its model database grows. Following the recovery of superquadric primitives from the input depth map, we split the computation into two concurrent processing streams. One is concerned with the classification of individual parts using viewpoint-invariant shape information, while the other classifies pairwise part relationships using their relative size, orientation, and type of joint. The major contribution of this proposal lies in a principled solution to the very difficult problems of superquadric part classification and model indexing: how to retrieve the best-matched models without exploring all possible object matches. Our approach is to cluster together similar model parts to create a reasonable number of prototypical part classes (protoparts). Each superquadric part recovered from the input is paired with the best-matching protopart using precomputed class statistics. A parallel, theoretically well-grounded evidential recognition algorithm quickly selects models consistent with the classified parts. Classified part relations (protorelations) are used to further reduce the number of consistent models, and remaining ambiguities are resolved using sequential top-down search.
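
    A minimal sketch of the protopart-classification step described above may help. It assumes each protopart class is summarized by precomputed Gaussian statistics (a mean and covariance over viewpoint-invariant superquadric shape parameters) and pairs a recovered part with its nearest class by Mahalanobis distance; the class names, parameter values, and functions such as classify_part are hypothetical illustrations, not the proposal's actual code.

        # Sketch: pair a recovered superquadric part with the best-matching
        # protopart using precomputed per-class statistics (all values toy).
        import numpy as np

        class Protopart:
            def __init__(self, name, mean, cov):
                self.name = name
                self.mean = np.asarray(mean, dtype=float)
                self.cov_inv = np.linalg.inv(np.asarray(cov, dtype=float))

            def mahalanobis(self, params):
                # Distance of a recovered part's parameters from this class.
                d = np.asarray(params, dtype=float) - self.mean
                return float(np.sqrt(d @ self.cov_inv @ d))

        def classify_part(params, protoparts):
            # The recovered part is assigned to the nearest protopart class.
            return min(protoparts, key=lambda p: p.mahalanobis(params))

        # Toy class statistics over (size, eps1, eps2) shape parameters.
        protoparts = [
            Protopart("elongated-cylinder", [5.0, 0.1, 1.0], np.eye(3) * 0.2),
            Protopart("flat-box",           [2.0, 0.1, 0.1], np.eye(3) * 0.2),
        ]
        part = classify_part([4.6, 0.2, 0.9], protoparts)
        print(part.name)  # -> elongated-cylinder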

    Three Highly Parallel Computer Architectures and Their Suitability for Three Representative Artificial Intelligence Problems

    Get PDF
    Virtually all current Artificial Intelligence (AI) applications are designed to run on sequential (von Neumann) computer architectures. As a result, current systems do not scale up: as knowledge is added to these systems, a point is reached where their performance quickly degrades. The performance of a von Neumann machine is limited by the bandwidth between memory and processor (the von Neumann bottleneck). The bottleneck is avoided by distributing the processing power across the memory of the computer; in this scheme the memory becomes the processor (a "smart memory"). This paper highlights the relationship between three representative AI application domains, namely knowledge representation, rule-based expert systems, and vision, and their parallel hardware realizations. Three machines, covering a wide range of fundamental properties of parallel processors, namely module granularity, concurrency control, and communication geometry, are reviewed: the Connection Machine (a fine-grained SIMD hypercube), DADO (a medium-grained MIMD/SIMD/MSIMD tree machine), and the Butterfly (a coarse-grained MIMD butterfly-switch machine).
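
    The contrast between the sequential style and the fine-grained data-parallel style of a machine like the Connection Machine can be suggested with a toy sketch. NumPy's vectorized operation stands in for "one instruction broadcast to many processor/memory cells"; this runs on a single CPU and is only a model of the programming style, not of the hardware.

        # Sketch: sequential streaming through one processor vs. a
        # data-parallel operation applied across all "cells" at once.
        import numpy as np

        cells = np.arange(100_000, dtype=np.int64)  # one value per "processing element"

        # Sequential (von Neumann): every element crosses the memory-processor
        # bottleneck one at a time.
        total_seq = 0
        for x in cells:
            total_seq += int(x) * 2

        # Data-parallel (SIMD model): the doubling happens "in the memory",
        # followed by a reduction (a log-depth tree on real SIMD hardware).
        total_par = int((cells * 2).sum())

        assert total_seq == total_par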

    Efficient Evidential Indexing of Three-Dimensional Models using Prototypical Parts

    No full text
    Ron Katriel; supervised by Lokendra Shastri. This thesis is concerned with efficient recognition of three-dimensional (3-D) objects using parametric part descriptions. The parametric shape models used are superquadrics, as recovered from depth data. The primary contribution of our research lies in a principled solution to the difficult problems of object part classification and model indexing. The novelty of our approach is in the use of a formal statistical approach for superquadric part classification and a formal evidential framework for model indexing. In addition, the method used for model indexing is amenable to massive parallelism using a connectionist implementation of evidential semantic networks. A major concern in practical vision systems is how to retrieve the best-matched models without exploring all possible object matches. Our approach is to cluster together similar model parts…
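
    The indexing idea can be sketched as parts voting for the models consistent with them, so that only high-evidence models are examined further. This assumes each object model is described by the protopart classes it contains; the model database, index, and evidence weights below are hypothetical stand-ins for the thesis's evidential framework, and a connectionist realization would perform the updates in parallel.

        # Sketch: evidential model indexing via an inverted protopart index.
        from collections import defaultdict

        # Hypothetical database: model name -> protopart classes it contains.
        MODELS = {
            "lamp":   ["elongated-cylinder", "flat-box"],
            "mug":    ["elongated-cylinder", "curved-handle"],
            "laptop": ["flat-box", "flat-box"],
        }

        # Precomputed inverted index: protopart class -> models containing it.
        INDEX = defaultdict(set)
        for model, parts in MODELS.items():
            for part in parts:
                INDEX[part].add(model)

        def index_models(classified_parts):
            # Each classified input part contributes evidence to every model
            # consistent with it; no exhaustive model-by-model matching.
            evidence = defaultdict(float)
            for part_class in classified_parts:
                for model in INDEX[part_class]:
                    evidence[model] += 1.0
            return sorted(evidence.items(), key=lambda kv: -kv[1])

        print(index_models(["elongated-cylinder", "flat-box"]))
        # -> [('lamp', 2.0), ('mug', 1.0), ('laptop', 1.0)]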

    Asymmetric Classification: Constructing Channels from Sources in Real-Time

    No full text
    In this note we describe a system, developed under contract for major newspaper and newswire publishers (and currently deployed commercially), that constructs "topic channels" in real time by simultaneously assigning one or more category codes to newspaper stories immediately upon publication and to newswire stories "on the run". The sources are diverse, but the category-code taxonomy (developed by human domain experts) is unified. The system, named "Cogent" (COdinG ENgine Technology), is asymmetric in two senses: (1) speed of classification is far more important than speed of training, and (2) precision is far more important than recall. Both requirements are taken to the extreme: a failure to classify within a few milliseconds, or the inclusion of an irrelevant story in a channel, is considered a system failure of the first magnitude, not a mere "glitch". The approach is statistical, and the threshold adjustment used to favor precision over recall has a direct interpretation as a likelihood ratio. Novel aspects include a new feature selection algorithm that drastically reduces dimensionality and the use of publisher-assigned metadata as features. Comparisons with published results indicate that Cogent performs as well as the best available text categorizers for newswires but uses substantially fewer features and computational resources during classification.
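
    A minimal sketch of the precision-over-recall thresholding may clarify the likelihood-ratio reading. It uses a two-class score in the Naive Bayes style, where a story enters the channel only if the accumulated log likelihood ratio exceeds a high threshold; the per-word probabilities, the threshold value, and names such as assign_code are invented for illustration and are not Cogent's actual statistics.

        # Sketch: category-code assignment gated by a likelihood-ratio threshold.
        import math

        # Hypothetical per-word likelihoods, P(word | in-channel) vs. P(word | not).
        P_IN  = {"merger": 0.02,  "acquisition": 0.015, "football": 0.0005}
        P_OUT = {"merger": 0.002, "acquisition": 0.001, "football": 0.01}

        def log_likelihood_ratio(words):
            # Sum of per-word log ratios; > 0 means "in-channel" is more likely.
            return sum(math.log(P_IN[w] / P_OUT[w]) for w in words if w in P_IN)

        def assign_code(words, threshold=4.0):
            # A high threshold trades recall for precision: the story is coded
            # only when the evidence ratio exceeds exp(4.0), roughly 55:1.
            return log_likelihood_ratio(words) >= threshold

        print(assign_code(["merger", "acquisition"]))  # log(10)+log(15) ~ 5.0 -> True
        print(assign_code(["football"]))               # log(0.05) < 0       -> False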